This tutorial illustrates the core visualization utilities available in Ax.
import numpy as np
from ax.service.ax_client import AxClient
from ax.modelbridge.cross_validation import cross_validate
from ax.plot.contour import interact_contour
from ax.plot.diagnostic import interact_cross_validation
from ax.plot.scatter import (
    interact_fitted,
    plot_objective_vs_constraints,
    tile_fitted,
)
from ax.plot.slice import plot_slice
from ax.utils.measurement.synthetic_functions import hartmann6
from ax.utils.notebook.plotting import render, init_notebook_plotting
init_notebook_plotting()
[INFO 09-29 05:44:23] ax.utils.notebook.plotting: Injecting Plotly library into cell. Do not overwrite or delete cell.
The visualizations require an experiment object and a model fit on the evaluated data. The routine below is a copy of the one in the Service API tutorial, so the explanation is omitted here. Retrieving the experiment and model objects for each API paradigm is shown in the respective tutorials.
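For the Service API used in this tutorial, both objects can be pulled directly off the client once trials have been run; a minimal sketch, assuming an AxClient named ax_client that has already completed some trials (as in the loop below):

experiment = ax_client.experiment  # the underlying Experiment object
model = ax_client.generation_strategy.model  # the most recently fit model (available once past the Sobol trials)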
noise_sd = 0.1
param_names = [f"x{i+1}" for i in range(6)] # x1, x2, ..., x6
def noisy_hartmann_evaluation_function(parameterization):
    x = np.array([parameterization.get(p_name) for p_name in param_names])
    noise1, noise2 = np.random.normal(0, noise_sd, 2)
    return {
        "hartmann6": (hartmann6(x) + noise1, noise_sd),
        "l2norm": (np.sqrt((x ** 2).sum()) + noise2, noise_sd),
    }
ax_client = AxClient()
ax_client.create_experiment(
    name="test_visualizations",
    parameters=[
        {
            "name": p_name,
            "type": "range",
            "bounds": [0.0, 1.0],
        }
        for p_name in param_names
    ],
    objective_name="hartmann6",
    minimize=True,
    outcome_constraints=["l2norm <= 1.25"],
)
[INFO 09-29 05:44:23] ax.service.ax_client: Starting optimization with verbose logging. To disable logging, set the `verbose_logging` argument to `False`. Note that float values in the logs are rounded to 6 decimal points.
[INFO 09-29 05:44:23] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x1. If that is not the expected value type, you can explicity specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 09-29 05:44:23] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x2. If that is not the expected value type, you can explicity specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 09-29 05:44:23] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x3. If that is not the expected value type, you can explicity specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 09-29 05:44:23] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x4. If that is not the expected value type, you can explicity specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 09-29 05:44:23] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x5. If that is not the expected value type, you can explicity specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 09-29 05:44:23] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x6. If that is not the expected value type, you can explicity specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 09-29 05:44:23] ax.service.utils.instantiation: Created search space: SearchSpace(parameters=[RangeParameter(name='x1', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x2', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x3', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x4', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x5', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x6', parameter_type=FLOAT, range=[0.0, 1.0])], parameter_constraints=[]).
[INFO 09-29 05:44:23] ax.modelbridge.dispatch_utils: Using Bayesian optimization since there are more ordered parameters than there are categories for the unordered categorical parameters.
[INFO 09-29 05:44:23] ax.modelbridge.dispatch_utils: Using Bayesian Optimization generation strategy: GenerationStrategy(name='Sobol+GPEI', steps=[Sobol for 12 trials, GPEI for subsequent trials]). Iterations after 12 will take longer to generate due to model-fitting.
for i in range(20):
    parameters, trial_index = ax_client.get_next_trial()
    # Local evaluation here can be replaced with deployment to an external system.
    ax_client.complete_trial(
        trial_index=trial_index,
        raw_data=noisy_hartmann_evaluation_function(parameters),
    )
/home/runner/work/Ax/Ax/ax/core/observation.py:274: FutureWarning:
In a future version of pandas, a length 1 tuple will be returned when iterating over a groupby with a grouper equal to a list of length 1. Don't supply a list with a single grouper to avoid this warning.
[INFO 09-29 05:44:23] ax.service.ax_client: Generated new trial 0 with parameters {'x1': 0.397566, 'x2': 0.928812, 'x3': 0.498911, 'x4': 0.1659, 'x5': 0.519406, 'x6': 0.08105}.
[INFO 09-29 05:44:23] ax.service.ax_client: Completed trial 0 with data: {'hartmann6': (-0.573029, 0.1), 'l2norm': (1.228966, 0.1)}.
[INFO 09-29 05:44:23] ax.service.ax_client: Generated new trial 1 with parameters {'x1': 0.393171, 'x2': 0.547976, 'x3': 0.246245, 'x4': 0.120412, 'x5': 0.16062, 'x6': 0.743934}.
[INFO 09-29 05:44:23] ax.service.ax_client: Completed trial 1 with data: {'hartmann6': (-0.719535, 0.1), 'l2norm': (1.110119, 0.1)}.
[INFO 09-29 05:44:23] ax.service.ax_client: Generated new trial 2 with parameters {'x1': 0.635148, 'x2': 0.107315, 'x3': 0.701421, 'x4': 0.741077, 'x5': 0.989576, 'x6': 0.550025}.
[INFO 09-29 05:44:23] ax.service.ax_client: Completed trial 2 with data: {'hartmann6': (-0.016531, 0.1), 'l2norm': (1.595582, 0.1)}.
[INFO 09-29 05:44:23] ax.service.ax_client: Generated new trial 3 with parameters {'x1': 0.278167, 'x2': 0.289269, 'x3': 0.298185, 'x4': 0.455416, 'x5': 0.2666, 'x6': 0.356455}.
[INFO 09-29 05:44:23] ax.service.ax_client: Completed trial 3 with data: {'hartmann6': (-0.986386, 0.1), 'l2norm': (0.841687, 0.1)}.
[INFO 09-29 05:44:23] ax.service.ax_client: Generated new trial 4 with parameters {'x1': 0.435224, 'x2': 0.633593, 'x3': 0.108418, 'x4': 0.410034, 'x5': 0.425966, 'x6': 0.425513}.
[INFO 09-29 05:44:23] ax.service.ax_client: Completed trial 4 with data: {'hartmann6': (-0.52678, 0.1), 'l2norm': (1.166687, 0.1)}.
[INFO 09-29 05:44:23] ax.service.ax_client: Generated new trial 5 with parameters {'x1': 0.495189, 'x2': 0.515663, 'x3': 0.51969, 'x4': 0.076997, 'x5': 0.01857, 'x6': 0.392037}.
[INFO 09-29 05:44:23] ax.service.ax_client: Completed trial 5 with data: {'hartmann6': (-0.136007, 0.1), 'l2norm': (0.777821, 0.1)}.
[INFO 09-29 05:44:23] ax.service.ax_client: Generated new trial 6 with parameters {'x1': 0.527386, 'x2': 0.581326, 'x3': 0.329155, 'x4': 0.0232, 'x5': 0.216258, 'x6': 0.193664}.
[INFO 09-29 05:44:23] ax.service.ax_client: Completed trial 6 with data: {'hartmann6': (-0.053128, 0.1), 'l2norm': (0.87714, 0.1)}.
[INFO 09-29 05:44:23] ax.service.ax_client: Generated new trial 7 with parameters {'x1': 0.649789, 'x2': 0.109086, 'x3': 0.089649, 'x4': 0.300904, 'x5': 0.347134, 'x6': 0.720396}.
[INFO 09-29 05:44:23] ax.service.ax_client: Completed trial 7 with data: {'hartmann6': (-1.44296, 0.1), 'l2norm': (1.239052, 0.1)}.
[INFO 09-29 05:44:23] ax.service.ax_client: Generated new trial 8 with parameters {'x1': 0.985939, 'x2': 0.870984, 'x3': 0.992716, 'x4': 0.454748, 'x5': 0.882714, 'x6': 0.054888}.
[INFO 09-29 05:44:23] ax.service.ax_client: Completed trial 8 with data: {'hartmann6': (0.025314, 0.1), 'l2norm': (1.970242, 0.1)}.
[INFO 09-29 05:44:23] ax.service.ax_client: Generated new trial 9 with parameters {'x1': 0.011527, 'x2': 0.296395, 'x3': 0.511582, 'x4': 0.449457, 'x5': 0.103982, 'x6': 0.125189}.
[INFO 09-29 05:44:23] ax.service.ax_client: Completed trial 9 with data: {'hartmann6': (-0.045701, 0.1), 'l2norm': (0.94903, 0.1)}.
[INFO 09-29 05:44:23] ax.service.ax_client: Generated new trial 10 with parameters {'x1': 0.721269, 'x2': 0.156014, 'x3': 0.729335, 'x4': 0.995211, 'x5': 0.129141, 'x6': 0.051905}.
[INFO 09-29 05:44:23] ax.service.ax_client: Completed trial 10 with data: {'hartmann6': (-0.041809, 0.1), 'l2norm': (1.502707, 0.1)}.
[INFO 09-29 05:44:23] ax.service.ax_client: Generated new trial 11 with parameters {'x1': 0.118625, 'x2': 0.033589, 'x3': 0.894741, 'x4': 0.84974, 'x5': 0.976442, 'x6': 0.991799}.
[INFO 09-29 05:44:23] ax.service.ax_client: Completed trial 11 with data: {'hartmann6': (0.037906, 0.1), 'l2norm': (1.934751, 0.1)}.
[INFO 09-29 05:44:35] ax.service.ax_client: Generated new trial 12 with parameters {'x1': 0.508108, 'x2': 0.174664, 'x3': 0.165269, 'x4': 0.339559, 'x5': 0.30239, 'x6': 0.603227}.
[INFO 09-29 05:44:35] ax.service.ax_client: Completed trial 12 with data: {'hartmann6': (-2.173777, 0.1), 'l2norm': (1.018017, 0.1)}.
[INFO 09-29 05:44:38] ax.service.ax_client: Generated new trial 13 with parameters {'x1': 0.422302, 'x2': 0.162345, 'x3': 0.152374, 'x4': 0.366265, 'x5': 0.275816, 'x6': 0.625657}.
[INFO 09-29 05:44:38] ax.service.ax_client: Completed trial 13 with data: {'hartmann6': (-2.351963, 0.1), 'l2norm': (1.079488, 0.1)}.
[INFO 09-29 05:45:06] ax.service.ax_client: Generated new trial 14 with parameters {'x1': 0.389148, 'x2': 0.111041, 'x3': 0.142444, 'x4': 0.321023, 'x5': 0.253231, 'x6': 0.585495}.
[INFO 09-29 05:45:06] ax.service.ax_client: Completed trial 14 with data: {'hartmann6': (-2.350205, 0.1), 'l2norm': (0.84776, 0.1)}.
[INFO 09-29 05:45:16] ax.service.ax_client: Generated new trial 15 with parameters {'x1': 0.423242, 'x2': 0.095327, 'x3': 0.174933, 'x4': 0.405948, 'x5': 0.214759, 'x6': 0.615798}.
[INFO 09-29 05:45:16] ax.service.ax_client: Completed trial 15 with data: {'hartmann6': (-1.979887, 0.1), 'l2norm': (0.829806, 0.1)}.
[INFO 09-29 05:45:20] ax.service.ax_client: Generated new trial 16 with parameters {'x1': 0.361005, 'x2': 0.162354, 'x3': 0.118863, 'x4': 0.304426, 'x5': 0.317659, 'x6': 0.613925}.
[INFO 09-29 05:45:20] ax.service.ax_client: Completed trial 16 with data: {'hartmann6': (-2.501719, 0.1), 'l2norm': (0.82402, 0.1)}.
[INFO 09-29 05:45:21] ax.service.ax_client: Generated new trial 17 with parameters {'x1': 0.37423, 'x2': 0.187147, 'x3': 0.050666, 'x4': 0.311663, 'x5': 0.269823, 'x6': 0.588457}.
[INFO 09-29 05:45:21] ax.service.ax_client: Completed trial 17 with data: {'hartmann6': (-2.024065, 0.1), 'l2norm': (0.857508, 0.1)}.
[INFO 09-29 05:45:23] ax.service.ax_client: Generated new trial 18 with parameters {'x1': 0.338247, 'x2': 0.148991, 'x3': 0.166357, 'x4': 0.283756, 'x5': 0.346662, 'x6': 0.634947}.
[INFO 09-29 05:45:23] ax.service.ax_client: Completed trial 18 with data: {'hartmann6': (-2.694264, 0.1), 'l2norm': (0.764609, 0.1)}.
[INFO 09-29 05:45:24] ax.service.ax_client: Generated new trial 19 with parameters {'x1': 0.331539, 'x2': 0.109937, 'x3': 0.175128, 'x4': 0.314535, 'x5': 0.411992, 'x6': 0.613039}.
[INFO 09-29 05:45:24] ax.service.ax_client: Completed trial 19 with data: {'hartmann6': (-2.193295, 0.1), 'l2norm': (0.993834, 0.1)}.
The plot below shows the response surface for the hartmann6 metric as a function of the x1 and x2 parameters.
The other parameters are fixed at the middle of their respective ranges, which in this example is 0.5 for all of them.
# This could alternatively be done with `ax.plot.contour.plot_contour`
render(ax_client.get_contour_plot(param_x="x1", param_y="x2", metric_name='hartmann6'))
[INFO 09-29 05:45:24] ax.service.ax_client: Retrieving contour plot with parameter 'x1' on X-axis and 'x2' on Y-axis, for metric 'hartmann6'. Remaining parameters are affixed to the middle of their range.
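As the comment above notes, the same figure can also be produced with the lower-level plot_contour function; a minimal sketch, assuming the generation strategy has already fit a model (i.e., after the Sobol trials):

from ax.plot.contour import plot_contour

render(
    plot_contour(
        model=ax_client.generation_strategy.model,
        param_x="x1",
        param_y="x2",
        metric_name="hartmann6",
    )
)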
The plot below allows toggling between different pairs of parameters to view the contours.
model = ax_client.generation_strategy.model
render(interact_contour(model=model, metric_name='hartmann6'))
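The same interactive contour can be rendered for the constraint metric by swapping the metric name:

render(interact_contour(model=model, metric_name='l2norm'))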
This plot illustrates the tradeoffs achievable between two different metrics. The plot takes the x-axis metric as input (usually the objective) and allows toggling among all other metrics for the y-axis.
This is useful for getting a sense of the Pareto frontier, i.e., the best objective value achievable for different bounds on the constraint.
render(plot_objective_vs_constraints(model, 'hartmann6', rel=False))
Cross-validation (CV) plots are useful for checking how well the model predictions calibrate against the actual measurements. If all points are close to the dashed line, the model is a good predictor of the real data.
cv_results = cross_validate(model)
render(interact_cross_validation(cv_results))
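If a numeric summary is preferred over the interactive plot, the same module provides a compute_diagnostics helper; a minimal sketch (the exact set of diagnostics returned may vary by Ax version):

from ax.modelbridge.cross_validation import compute_diagnostics

diagnostics = compute_diagnostics(cv_results)  # maps diagnostic name -> per-metric values
print(diagnostics)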
Slice plots show the metric outcome as a function of one parameter while fixing the others. They serve a similar function to contour plots.
render(plot_slice(model, "x2", "hartmann6"))
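The same call works for the constraint metric, for example:

render(plot_slice(model, "x2", "l2norm"))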
Tile plots are useful for viewing the model's predicted outcome for each arm.
render(interact_fitted(model, rel=False))
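The tile_fitted function imported at the top produces a static grid of per-metric tile plots of the same fitted values, rather than an interactive widget; a short usage sketch:

render(tile_fitted(model, rel=False))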
Total runtime of script: 1 minute, 24.07 seconds.